
    An Analysis of Coordinated Responding

    Experimental analyses of coordinated responding (i.e., cooperation) have been derived from a procedure described by Skinner (1962) in which reinforcers were delivered to a pair of subjects (a dyad) if both responded within a short interval, thus satisfying a mutual-reinforcement contingency. Although it has been suggested that this contingency enhances rates of temporally coordinated responding, limitations of past experiments have raised questions concerning this conclusion. The present experiments addressed three of those limitations by holding the schedule of reinforcement (Experiment 1: fixed-ratio 1; Experiment 2: variable-interval 20 s) constant (1) across phases and (2) between dyad members, and (3) by varying the number of keys across which responses could be coordinated. Greater percentages of coordinated responding were observed under mutual- than under independent-reinforcement phases in most conditions. The one exception, during the one-key condition of Experiment 1, appeared to be a consequence of variability during the independent-reinforcement phase. Furthermore, coordination percentages decreased systematically with increasing response options. The present results thus confirm that mutual-reinforcement contingencies induce higher rates of temporally coordinated responding than independent-reinforcement contingencies. The results further indicate that the effects of mutual-reinforcement contingencies can be influenced by the environmental context in which those contingencies operate.
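    The mutual-reinforcement contingency described above can be illustrated with a small simulation. This is a minimal sketch, not the study's procedure: the 0.5 s coordination window, the response-time lists, and the function names are all hypothetical.

```python
def mutual_reinforcement(t_a, t_b, window=0.5):
    """A reinforcer is delivered only if both dyad members respond
    within `window` seconds of one another (window is illustrative)."""
    return abs(t_a - t_b) <= window

def coordination_percentage(responses_a, responses_b, window=0.5):
    """Percentage of A's responses that B matched within the window."""
    if not responses_a:
        return 0.0
    coordinated = sum(
        1 for ta in responses_a
        if any(abs(ta - tb) <= window for tb in responses_b)
    )
    return 100.0 * coordinated / len(responses_a)

# A one-key dyad: A responds at 1.0 s and 2.0 s, B at 1.2 s and 5.0 s.
print(coordination_percentage([1.0, 2.0], [1.2, 5.0]))  # → 50.0
```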

    An Analysis of the Extinction-Induced Response Burst

    Although the extinction burst is a frequently reported generative effect of extinction, there are few experimental analyses of the phenomenon. The purpose of the present series of experiments was to examine the occurrence, time course, and repeatability of extinction bursts. Six experimentally naive pigeons were exposed to at least five cycles of 5-session blocks of baseline followed by 8-session blocks of extinction. Depending on the condition, baseline sessions arranged either a fixed-ratio (FR) or variable-ratio (VR) schedule, and transitions from the last baseline session in each cycle to the first extinction session were conducted either between or within sessions. Within a block, extinction remained in effect throughout each subsequent session. There was not a single instance of an extinction burst when whole-session response rates were considered. Restricting the analysis to the first minute of an extinction session sometimes revealed a burst, most often during the first extinction session of a block, although this finding was not consistent. The frequency and magnitude of the extinction burst differed across exposures to extinction both across and within pigeons. Additionally, details of how the burst was measured (i.e., the level of analysis and the definition of the phenomenon) influenced the occurrence and dimensions of the extinction burst. The results of the three experiments suggest that the way in which extinction is implemented and how the burst is defined influence whether a burst-like increase in responding is observed at the onset of extinction. Under the best of conditions, the extinction burst does not appear to be a reliable generative effect of extinction.
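    The measurement issue the abstract raises, whole-session versus first-minute response rates, can be illustrated with a toy rate comparison. The function names, session length, and the simple "first-minute extinction rate exceeds whole-session baseline rate" criterion are assumptions for illustration, not the authors' definitions.

```python
def response_rate(times, start, end):
    """Responses per minute, given response timestamps (s) in [start, end)."""
    n = sum(start <= t < end for t in times)
    return n / ((end - start) / 60.0)

def burst_detected(baseline_times, extinction_times,
                   session_len=1800.0, window=60.0):
    """Crude burst criterion (illustrative only): the rate in the first
    minute of extinction exceeds the whole-session baseline rate."""
    return (response_rate(extinction_times, 0.0, window) >
            response_rate(baseline_times, 0.0, session_len))
```

    Whether this returns True depends heavily on the analysis window, which mirrors the abstract's point that the level of analysis shapes whether a burst is observed at all.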

    Anechoic audio and 3D-video content database of small ensemble performances for virtual concerts

    This paper presents the details related to the creation of a public database of anechoic audio and 3D-video recordings of several small music ensemble performances. Musical extracts range from baroque to jazz music. This work aims at extending the already available public databases of anechoic stimuli, providing the community with flexible audiovisual content for virtual acoustic simulations. For each piece of music, musicians were first close-mic recorded together to provide an audio performance reference. This recording was followed by individual instrument retake recordings, made while listening to the reference recording, to achieve the best audio separation between instruments. In parallel, 3D-video content was recorded for each musician, employing a system of multiple Kinect 2 RGB-Depth sensors, allowing for the generation and easy manipulation of 3D point clouds. Details are provided of the choice of musical pieces, the recording procedure, and the system architecture, including the post-processing treatments needed to render the stimuli in immersive audiovisual environments.

    Analysis of head rotation trajectories during a sound localization task

    Dynamic changes of Head-Related Transfer Function renderings as a function of head movement have been shown to be an important cue in sound localization. To investigate the cognitive process of dynamic sound localization, quantification of the characteristics of head movements is needed. In this study, trajectories of head rotation in a sound localization task were measured and analyzed. Listeners were asked to orient themselves towards the active sound source, one of five loudspeakers located at 30° intervals in the horizontal plane. A 1 s pink noise burst stimulus was emitted from the different speakers in random order. Expected head rotations (EHR) for a given stimulus therefore ranged from 30° to 120°. Head orientation (yaw, pitch, and roll) was measured with a motion capture system. The analysis examined angular velocity, overshoot, and reaction time (RT). Results show that angular velocity increased as EHR increased. No relationship between overshoot and EHR was observed. RT was almost constant (≈260 ms) regardless of EHR. This may suggest that dynamic sound localization studies could be difficult with stimuli shorter than 260 ms.
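    The trajectory measures named above (reaction time and angular velocity) can be extracted from a yaw time series along the following lines. This is an illustrative sketch only: the 2° onset threshold and the simple finite-difference velocity estimate are assumptions, not the study's actual analysis parameters.

```python
def reaction_time(t, yaw, threshold=2.0):
    """Return the first time (s) the head has rotated more than
    `threshold` degrees from its starting orientation, or None.
    The threshold value is a hypothetical onset criterion."""
    for ti, yi in zip(t, yaw):
        if abs(yi - yaw[0]) > threshold:
            return ti
    return None

def peak_angular_velocity(t, yaw):
    """Maximum absolute finite-difference angular velocity (deg/s)."""
    return max(abs((yaw[i + 1] - yaw[i]) / (t[i + 1] - t[i]))
               for i in range(len(t) - 1))

# Toy trajectory sampled at 10 Hz: head still for 0.1 s, then rotating.
t = [0.0, 0.1, 0.2, 0.3]
yaw = [0.0, 0.0, 5.0, 10.0]
print(reaction_time(t, yaw))  # → 0.2
```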

    Numerical simulation round robin of a coupled volume case: Preliminary results

    The advantages and limitations of most numerical methods in room acoustics have to date been evaluated primarily in single-volume room conditions, placing emphasis on early reflection components and the early part of the room acoustic impulse response. Few studies have examined the capability of simulations to correctly model the case of coupled volumes, where the late part of the impulse response is not a simple extension of the early part and needs to be accurately represented. This work presents preliminary results of a round robin study comparing numerical simulation results with coupled volume theory, using physical scale model measurements to define general parameters. Numerical methods included geometrical acoustic solutions, with image source method and ray/cone/path-tracing type approaches, and wave-based methods, comprising several FDTD implementations. A scale model was used to set the parameters of a statistical model to ensure a physically realistic configuration. Room model coordinates were specified. To avoid issues regarding variations in the implementation of material and scattering behaviors across methods, the reverberation time of the separate individual volumes was prescribed in the uncoupled condition. The volumes were then coupled and the results analyzed. The comparison is of a rather simplified room acoustic model, assuming homogeneous boundary conditions.
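    The late-part behavior that coupled-volume simulations must capture is a double-slope energy decay, which statistical theory often models as a sum of two exponentials. A minimal sketch follows, with illustrative decay times and coupling amplitude (not values from the study); the constant 13.8 ≈ ln(10⁶) maps a decay time T to a 60 dB decay.

```python
import math

def double_slope_decay(t, t1, t2, a1=1.0, a2=0.01):
    """Energy decay of a coupled-volume system as a sum of two
    exponentials: an early slope with decay time t1 (s) and a late
    slope with decay time t2 (s). Amplitudes a1, a2 are illustrative."""
    return a1 * math.exp(-13.8 * t / t1) + a2 * math.exp(-13.8 * t / t2)

def decay_db(t, t1, t2, a1=1.0, a2=0.01):
    """Decay level in dB relative to the energy at t = 0."""
    return 10.0 * math.log10(double_slope_decay(t, t1, t2, a1, a2) /
                             double_slope_decay(0.0, t1, t2, a1, a2))
```

    With t2 much longer than t1, the curve bends: the early part falls at roughly 60/t1 dB/s, then flattens toward 60/t2 dB/s, which is exactly the part of the response that single-volume-oriented methods tend to mispredict.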

    Introduction to the Special Issue on Teaching Inquiry (Part II): Implementing Inquiry

    We provide an introduction to the special issue on Teaching Inquiry, through its motivation and themes, focusing here on Part II: Implementing Inquiry.

    Introduction to the Special Issue on Teaching Inquiry (Part I): Illuminating Inquiry

    We provide an introduction to the special issue on Teaching Inquiry, through its motivation and themes. We focus here on Part I: Illuminating Inquiry.

    SMART-I²: A Spatial Multi-users Audio-visual Real Time Interactive Interface

    The SMART-I² system aims at creating a precise and coherent virtual environment by providing users with accurate localization cues in both the audio and visual modalities. It is known that Wave Field Synthesis for audio rendering and Tracked Stereoscopy for visual rendering each individually permit high-quality spatial immersion within an extended space. The proposed system combines these two rendering approaches through the use of a large Multi-Actuator Panel serving as both a loudspeaker array and a projection screen, considerably reducing audio-visual incoherencies. The system performance has been confirmed by an objective validation of the audio interface and a perceptual evaluation of the audio-visual rendering.